TDCMR: Triplet-Based Deep Cross-Modal Retrieval for Geo-Multimedia Data
Authors
Abstract
Massive amounts of multimedia data with geographical information (geo-multimedia) are collected and stored on the Internet due to the wide application of location-based services (LBS). How to discover the high-level semantic relationships between geo-multimedia objects and how to construct an efficient index are crucial for large-scale retrieval. To address this challenge, this paper proposes a deep cross-modal hashing framework for geo-multimedia retrieval, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR), which utilizes deep neural networks and an enhanced triplet constraint to capture high-level semantics. In addition, a novel hybrid index, called TH-Quadtree, is developed by combining binary hash codes with a quadtree to support high-performance geo-multimedia search. Extensive experiments conducted on three commonly used benchmarks show the superior performance of the proposed method.
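The abstract describes the training signal only at a high level, so the following is a minimal sketch of a margin-based cross-modal triplet loss of the kind TDCMR builds on. It assumes relaxed (tanh) hash codes produced by an image network and a text network, cosine similarity, and hardest-in-batch mining; the margin value and the mining strategy are illustrative assumptions, not the paper's exact "enhanced triplet constraint".

import torch
import torch.nn.functional as F

def cross_modal_triplet_loss(img_codes, txt_codes, labels, margin=0.5):
    # img_codes, txt_codes: (batch, code_length) relaxed hash codes (tanh outputs).
    # labels: (batch,) integer class labels; margin=0.5 is an illustrative choice.
    sim = F.normalize(img_codes, dim=1) @ F.normalize(txt_codes, dim=1).t()
    same = labels.unsqueeze(1).eq(labels.unsqueeze(0))   # True where labels match
    losses = []
    for i in range(sim.size(0)):                         # each image acts as an anchor
        pos, neg = sim[i][same[i]], sim[i][~same[i]]
        if pos.numel() == 0 or neg.numel() == 0:
            continue
        # Push the hardest cross-modal positive to be closer than the hardest negative.
        losses.append(F.relu(margin - pos.min() + neg.max()))
    return torch.stack(losses).mean() if losses else sim.new_zeros(())

In a full pipeline the same loss would normally be applied symmetrically with text anchors, and the relaxed codes would be binarized with sign() at indexing time.

The abstract also names a hybrid TH-Quadtree index but gives no construction details, so the sketch below shows one plausible way to combine a point quadtree over coordinates with binary hash codes: leaves store (location, hash code) pairs, a query first prunes subtrees by spatial range and then ranks the surviving candidates by Hamming distance. The leaf capacity, the circular range filter, and the integer hash-code representation are assumptions for illustration, not the paper's data structure.

from dataclasses import dataclass, field

CAPACITY = 8  # leaf capacity before splitting (illustrative choice)

@dataclass
class Node:
    x0: float
    y0: float
    x1: float
    y1: float                                             # node bounding box
    items: list = field(default_factory=list)             # (x, y, hash_code, obj_id)
    children: list = field(default_factory=list)          # four sub-quadrants once split

    def insert(self, x, y, code, obj_id):
        if self.children:
            self._child(x, y).insert(x, y, code, obj_id)
            return
        self.items.append((x, y, code, obj_id))
        if len(self.items) > CAPACITY:
            self._split()

    def _split(self):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        self.children = [Node(self.x0, self.y0, mx, my), Node(mx, self.y0, self.x1, my),
                         Node(self.x0, my, mx, self.y1), Node(mx, my, self.x1, self.y1)]
        for x, y, code, obj_id in self.items:
            self._child(x, y).insert(x, y, code, obj_id)
        self.items = []

    def _child(self, x, y):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        return self.children[(0 if x < mx else 1) + (0 if y < my else 2)]

    def query(self, qx, qy, radius, q_code, k=10):
        # Spatial pruning first, then semantic ranking by Hamming distance
        # between the query hash code and each candidate's hash code.
        cands = []
        self._collect(qx, qy, radius, cands)
        cands.sort(key=lambda it: bin(it[2] ^ q_code).count("1"))
        return cands[:k]

    def _collect(self, qx, qy, r, out):
        if self.x1 < qx - r or self.x0 > qx + r or self.y1 < qy - r or self.y0 > qy + r:
            return  # this subtree cannot contain results within the radius
        out.extend((x, y, c, i) for x, y, c, i in self.items
                   if (x - qx) ** 2 + (y - qy) ** 2 <= r * r)
        for ch in self.children:
            ch._collect(qx, qy, r, out)

# Example usage with hypothetical coordinates and 8-bit codes:
# root = Node(0.0, 0.0, 1.0, 1.0)
# root.insert(0.31, 0.72, 0b10110010, obj_id=1)
# hits = root.query(0.3, 0.7, radius=0.05, q_code=0b10110110, k=5)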
Similar resources
Heterogeneous Metric Learning for Cross-Modal Multimedia Retrieval
Due to the massive explosion of multimedia content on the web, users demand a new type of information retrieval, called cross-modal multimedia retrieval where users submit queries of one media type and get results of various other media types. Performing effective retrieval of heterogeneous multimedia content brings new challenges. One essential aspect of these challenges is to learn a heteroge...
Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval
With benefits of low storage cost and fast query speed, crossmodal hashing has received considerable attention recently. However, almost all existing methods on cross-modal hashing cannot obtain powerful hash codes due to directly utilizing hand-crafted features or ignoring heterogeneous correlations across different modalities, which will greatly degrade the retrieval performance. In this pape...
Collective Deep Quantization for Efficient Cross-Modal Retrieval
Cross-modal similarity retrieval is a problem about designing a retrieval system that supports querying across content modalities, e.g., using an image to retrieve for texts. This paper presents a compact coding solution for efficient cross-modal retrieval, with a focus on the quantization approach which has already shown the superior performance over the hashing solutions in single-modal simil...
Learning Deep Semantic Embeddings for Cross-Modal Retrieval
Deep learning methods have been actively researched for cross-modal retrieval, with the softmax cross-entropy loss commonly applied for supervised learning. However, the softmax cross-entropy loss is known to result in large intra-class variances, which is not well suited for cross-modal matching. In this paper, a deep architecture called Deep Semantic Embedding (DSE) is proposed, which is ...
HashGAN: Attention-aware Deep Adversarial Hashing for Cross Modal Retrieval
With the rapid growth of multi-modal data, hashing methods for cross-modal retrieval have received considerable attention. Deep-networks-based cross-modal hashing methods are appealing as they can integrate feature learning and hash coding into end-to-end trainable frameworks. However, it is still challenging to find content similarities between different modalities of data due to the heterogenei...
Journal
Journal title: Applied Sciences
Year: 2021
ISSN: 2076-3417
DOI: https://doi.org/10.3390/app112210803